
    Gaussian Process applied to modeling the dynamics of a deformable material

    In this thesis, we establish the theoretical basis of dimensionality reduction algorithms such as the GPLVM, their application to reproducing a time series of observables with the GPDM, and the GPDM's generalization with control inputs, the CGPDM. Finally, we introduce a new, more computationally efficient model, the MoCGPDM, built by applying a mixture of experts. The final section consists of fine-tuning this model and comparing it with the previous one.
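
The pipeline the abstract describes can be sketched end-to-end: project high-dimensional observations onto a low-dimensional latent space, then learn the latent transition function with a Gaussian process. A minimal sketch, using PCA in place of the GPLVM projection and keeping only the GP posterior mean for the dynamics (the kernel, length-scale, and toy data are illustrative assumptions; the thesis's GPLVM/CGPDM optimizes the latent coordinates jointly, which this sketch does not):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # squared-exponential kernel between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def fit_latent_dynamics(Y, U, dim=2, noise=1e-3):
    """Project observations Y (T x D) onto a latent space and fit
    GP-mean dynamics x_{t+1} = f(x_t, u_t) there. PCA stands in for
    the GPLVM projection; only the GP posterior mean is kept."""
    Yc = Y - Y.mean(0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    X = Yc @ Vt[:dim].T                      # latent trajectory (T x dim)
    Z = np.hstack([X[:-1], U[:-1]])          # GP inputs: (x_t, u_t)
    K = rbf(Z, Z) + noise * np.eye(len(Z))
    alpha = np.linalg.solve(K, X[1:])        # GP posterior-mean weights
    def step(x, u):
        z = np.hstack([x, u])[None, :]
        return (rbf(z, Z) @ alpha)[0]        # predicted next latent state
    return X, step

# toy observables: a 3-D signal driven by one oscillation, zero control
t = np.linspace(0.0, 4.0 * np.pi, 60)
Y = np.stack([np.sin(t), np.cos(t), 0.5 * np.sin(t)], axis=1)
U = np.zeros((60, 1))
X, step = fit_latent_dynamics(Y, U)
x_next = step(X[0], U[0])   # one-step latent prediction
```

Rolling `step` forward from an initial latent state reproduces the trajectory in latent space; mapping back through the PCA basis recovers the observables.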

    Simulation of Brownian Particles in a Channel

    We study the behavior of ions undergoing Brownian motion in the stationary state along a channel, and the influence of externally applied potentials and of the channel's shape. Our goal is to implement a one-dimensional model that captures these effects by means of an effective potential and a diffusion coefficient. The method could be used as a tool to reduce the computational cost of modeling these systems, which would otherwise need to be modeled in three dimensions.
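
The one-dimensional reduction described above is usually formulated as overdamped Langevin dynamics in an effective potential (the Fick-Jacobs picture). A minimal Euler-Maruyama sketch, where the toy potential A(x) is an invented stand-in for the thesis's effective potential (external bias plus the entropic effect of the channel shape) and D plays the role of the effective diffusion coefficient:

```python
import math
import random

def simulate_channel_1d(steps=20000, dt=1e-3, D=1.0, seed=0):
    """Overdamped Langevin (Euler-Maruyama) dynamics in an effective
    1-D potential; A(x) and all parameter values are illustrative."""
    rng = random.Random(seed)
    dA = lambda x: x + 0.5 * math.sin(2.0 * x)   # dA/dx, confining
    x, traj = 0.0, []
    for _ in range(steps):
        noise = rng.gauss(0.0, math.sqrt(2.0 * D * dt))
        x += -dA(x) * dt + noise                 # Euler-Maruyama step
        traj.append(x)
    return traj

traj = simulate_channel_1d()
mean_x = sum(traj) / len(traj)   # stationary mean, ~0 by symmetry
```

Histogramming `traj` recovers the stationary density exp(-A(x)/D), which is how an effective potential fitted from 3-D data would be validated.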

    Efficient management of energy systems including storage systems

    In this master's thesis, we work towards a data-based control algorithm that, given a complex system and a goal, can learn how to manipulate the system's control actions to achieve that goal. The objective of developing such a control algorithm is to ensure the correct operation of a small smart microgrid with several components, balancing operational costs under optimal conditions with the economic performance of the system. In particular, we describe the components that make up a standard microgrid, including Energy Storage Systems (ESS), considering their efficiency and operational limits. Energy generation systems that rely on climatic conditions cannot produce on demand, and consumption hubs, like a regular household, have electrical power demands that depend on daily habits and can change from time to time. A microgrid for a "prosumer" (generally a consumer of energy from the electrical grid with production and storage capabilities) usually relies on several types of ESS with complementary characteristics, such as a battery with higher storage capacity and a supercapacitor with higher power density. The production side must have at least one energy source, usually a renewable one, though combining several sources can increase the reliability of the system. At this level of management, the system can be considered a set of ESS whose States of Charge (SOC) form the working space and can be leveled with the control actions. This type of management is traditionally done with an economic criterion, whose optimization requires an accurate model of the system and the correct parametrization of the goals. We propose a different method that can be used without prior knowledge of the system dynamics and can simultaneously generate control signals and learn the optimal policy, given a parametrized cost function and sensing of the system states.
    The proposed methods are called Structured Online Learning methods, and they rely on two distinct parts: a System Identification module, which learns the dynamics of the system from the collected data as a linear combination of non-linear basis functions of the state; and a Value Function learner, which is updated from the learned model and can generate the control signal that is optimal over a long-term window. As these methods require a differentiable and convex cost over the control effort, we introduce some assumptions on the operational cost that adapt the economic problem to an equivalent of a Quadratic Regulator. This also gives us the opportunity to compare our method with standard solutions such as the Riccati equation for a Linear Quadratic Regulator. Comparing several versions of the Structured Online Learning algorithm against the standard solutions for the adapted problem, which leverage knowledge of the real model, we can confidently state that the proposed method generates comparable results in a reference-tracking problem. Moreover, it can even improve the economic performance when the generation and consumption profiles have a certain degree of predictability.
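
The Riccati baseline mentioned above is concrete in the scalar case: iterate the discrete-time algebraic Riccati equation to a fixed point and read off the optimal LQR gain. A minimal sketch (the system values are illustrative, not taken from the microgrid model):

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Solve the scalar discrete-time algebraic Riccati equation by
    fixed-point iteration and return the optimal LQR gain k, for the
    control law u = -k * x. This is the standard baseline a learned
    policy would be compared against."""
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    k = (b * p * a) / (r + b * p * b)
    return k, p

k, p = dlqr_scalar(a=1.0, b=0.5, q=1.0, r=0.1)
closed_pole = 1.0 - 0.5 * k   # a - b*k must lie inside the unit circle
```

A Structured Online Learning controller would be judged against this gain on the same quadratic cost, without being given `a` and `b` in advance.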

    Mixtures of controlled Gaussian processes for dynamical modeling of deformable objects

    Control and manipulation of objects is a highly relevant topic in Robotics research. Although significant advances have been made in the manipulation of rigid bodies, the manipulation of non-rigid objects is still challenging and an open problem. Due to the uncertainty of the outcome when applying physical actions to non-rigid objects, using prior knowledge of objects' dynamics can greatly improve control performance. However, fitting such models is a challenging task for materials such as clothing, where the state is represented by points in a mesh, resulting in a very large dimensionality that makes models difficult to learn, process, and use for prediction from measured data. In this paper, we expand previous work on Controlled Gaussian Process Dynamical Models (CGPDM), a method that uses a non-linear projection of the state space onto a much smaller latent space and learns the object dynamics in that latent space. We take advantage of the variability in training data by employing a Mixture of Experts (MoE), and we provide theory and experimental validations that demonstrate significant improvements in training and prediction times, plus robustness and error stability when predicting deformable objects exposed to disparate movement ranges. This work was partially developed in the context of the project CLOTHILDE ("CLOTH manIpulation Learning from DEmonstrations"), which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Advanced Grant agreement No 741930). We would like to thank the members of the HCRL Lab and the Department of Aerospace Engineering and Engineering Mechanics at The University of Texas at Austin for their feedback during the development of this work.
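
The Mixture of Experts idea can be sketched generically: each expert specializes in a region of the input space, and a gate blends their predictions. A minimal distance-based softmax gate (the paper's gating over CGPDM experts differs in detail; the toy experts and centers below are illustrative):

```python
import numpy as np

def moe_predict(x, experts, centers, temp=1.0):
    """Combine expert predictions with a distance-based softmax gate:
    experts whose region centers are closer to x get more weight."""
    d2 = np.array([np.sum((x - c) ** 2) for c in centers])
    w = np.exp(-d2 / temp)
    w = w / w.sum()                      # gate weights sum to one
    preds = np.array([f(x) for f in experts])
    return float(w @ preds), w

# two toy experts with opposite behavior, gated by proximity
experts = [lambda x: float(x.sum()), lambda x: float(-x.sum())]
centers = [np.zeros(2), np.ones(2)]
y, w = moe_predict(np.zeros(2), experts, centers)
```

Because each expert is trained on a subset of the data, training and prediction costs drop relative to one monolithic model, which is the speed-up the abstract reports.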

    Identification of an ancient mantle reservoir and young recycled materials in the source region of a young mantle plume: Implications for potential linkages between plume and plate tectonics

    Whether or not mantle plumes and plate subduction are genetically linked is a fundamental geoscience question that impinges on our understanding of how the Earth works. Late Cenozoic basalts in Southeast Asia are globally unique in relation to this question because they occur above a seismically detected thermal plume adjacent to deeply subducted slabs. In this study, we present new Pb, Sr, Nd, and Os isotope data for the Hainan flood basalts. Together with a compilation of published results, our work shows that less contaminated basaltic samples from the synchronous basaltic eruptions in the Hainan–Leizhou peninsula, the Indochina peninsula, and the South China Sea seamounts share the same isotopic and geochemical characteristics. They have FOZO-like Sr, Nd, and Pb isotopic compositions (the dominant lower mantle component). These basalts have primitive Pb isotopic compositions that lie on, or very close to, the 4.5- to 4.4-Ga geochrons on the 207Pb/204Pb versus 206Pb/204Pb diagram, suggesting a mantle source that developed early in Earth's history (4.5–4.4 Ga). Furthermore, our detailed geochemical and Sr, Nd, Pb, and Os isotopic analyses suggest the presence of 0.5–0.2-Ga recycled components in the late Cenozoic Hainan plume basalts. This implies a mantle circulation rate of >1 cm/yr, similar to previous estimates for the Hawaiian mantle plume. The identification of an ancient mantle reservoir and young recycled materials in the source region of these synchronous basalts is consistent with the seismically detected lower-mantle-rooted Hainan plume, which is adjacent to deeply subducted slab-like seismic structures just above the core–mantle boundary. We speculate that continued deep subduction and the presence of a dense segregated basaltic layer may have triggered the plume to rise from the thermal–chemical pile. This work therefore suggests a dynamic linkage between deep subduction and mantle plume generation.

    A Survey on In-context Learning

    With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), in which LLMs make predictions based only on contexts augmented with a few examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress and challenges of ICL. We first present a formal definition of ICL and clarify its relation to related studies. We then organize and discuss advanced techniques, including training strategies and demonstration design strategies, as well as related analyses. Finally, we discuss the challenges of ICL and provide potential directions for further research. We hope that our work can encourage more research on uncovering how ICL works and on improving ICL.
    Comment: Papers collected until 2023/05/2
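
The "demonstration design" the survey discusses starts from how a few-shot prompt is assembled: an instruction, k labeled demonstrations, then the unlabeled query. A minimal sketch; the Input/Label template is one common convention, not a format prescribed by the survey:

```python
def build_icl_prompt(demos, query, instruction="Classify the sentiment."):
    """Assemble a few-shot in-context learning prompt from labeled
    demonstrations and an unlabeled query; the model is expected to
    continue the text after the final 'Label:'."""
    parts = [instruction]
    for text, label in demos:
        parts.append(f"Input: {text}\nLabel: {label}")
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

demos = [("great movie", "positive"), ("boring plot", "negative")]
prompt = build_icl_prompt(demos, "loved the soundtrack")
```

Demonstration selection, ordering, and template wording (all varied in this function's arguments) are exactly the knobs the surveyed work shows can swing ICL accuracy.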

    Risk Prediction of Second Primary Malignancies in Primary Early-Stage Ovarian Cancer Survivors: A SEER-Based National Population-Based Cohort Study

    Purpose: This study aimed to characterize the clinical features of early-stage ovarian cancer (OC) survivors with second primary malignancies (SPMs) and to provide a prediction tool for the individualized risk of developing SPMs. Methods: Data were obtained from the Surveillance, Epidemiology, and End Results (SEER) database for 1998–2013. Considering non-SPM death as a competing event, the Fine and Gray model and the corresponding nomogram were used to identify the risk factors for SPMs and to predict SPM probabilities after the initial OC diagnosis. Decision curve analysis (DCA) was performed to evaluate the clinical utility of our proposed model. Results: A total of 14,314 qualified patients were enrolled. The diagnosis rate and the cumulative incidence of SPMs were 7.9% and 13.6% [95% confidence interval (CI) = 13.5% to 13.6%], respectively, during a median follow-up of 8.6 years. The multivariable competing risk analysis suggested that older age at initial cancer diagnosis, white race, epithelial histologic subtypes of OC (serous, endometrioid, mucinous, and Brenner tumor), fewer than 12 lymph nodes examined, and radiotherapy were significantly associated with an elevated SPM risk. The DCA revealed that the net benefit obtained by our proposed model was higher than in the all-screening and no-screening scenarios within a wide range of risk thresholds (1% to 23%). Conclusion: The competing risk nomogram can potentially help physicians identify patients at different risks of SPMs and schedule risk-adapted clinical management. More comprehensive data on treatment regimens and patient characteristics may help improve the predictability of the risk model for SPMs.
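
The "cumulative incidence with a competing event" the study reports is a distinct quantity from ordinary survival: each event's incidence is discounted by the probability of still being event-free. A didactic sketch of an Aalen-Johansen-style estimator (assuming untied, uncensored event times; the paper itself fits a Fine and Gray regression on top of this quantity, and the toy data are invented):

```python
def cumulative_incidence(times, events, cause=1):
    """Cumulative incidence of one event type in the presence of a
    competing event. Illustrative event codes: 0 = censored,
    1 = event of interest (here, an SPM), 2 = competing death."""
    records = sorted(zip(times, events))
    at_risk = len(records)
    surv, cif = 1.0, 0.0
    for _, e in records:
        if e == cause:
            cif += surv / at_risk           # S(t-) * cause-specific hazard
        if e != 0:
            surv *= 1.0 - 1.0 / at_risk     # update event-free survival
        at_risk -= 1
    return cif

# four patients: SPMs at t=1 and t=3, competing deaths at t=2 and t=4
cif_spm = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 2], cause=1)
```

With no censoring, the cause-specific incidences sum to one, which is the sanity check that a naive 1 − Kaplan–Meier estimate would fail here.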

    Multi-Feature Collaborative Fusion Network with Deep Supervision for SAR Ship Classification

    Multi-feature SAR ship classification aims to build models that can process, correlate, and fuse information from both handcrafted and deep features. Although handcrafted features provide rich expert knowledge, current fusion methods inadequately explore the relatively significant role of handcrafted features alongside deep features, the imbalances in feature contributions, and the ways in which features can learn cooperatively. In this paper, we propose a novel multi-feature collaborative fusion network with deep supervision (MFCFNet) to effectively fuse handcrafted and deep features for SAR ship classification. Specifically, our framework mainly comprises two types of feature extraction branches, a knowledge supervision and collaboration module, and a feature fusion and contribution assignment module. The former module improves the quality of the feature maps learned by each branch through auxiliary feature supervision and introduces a synergy loss to facilitate the exchange of information between deep and handcrafted features. The latter module utilizes an attention mechanism to adaptively balance the importance of the various features and assigns the corresponding feature contributions to the total loss function based on the generated feature weights. We conducted extensive experiments and ablation studies on two public datasets, OpenSARShip-1.0 and FUSAR-Ship, and the results show that MFCFNet is effective and outperforms both single-deep-feature models and multi-feature models based on the internal-FC-layer and terminal-FC-layer fusion used in previous work. Furthermore, our proposed MFCFNet exhibits better performance than the current state-of-the-art methods.
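
The attention-based contribution assignment can be sketched on two feature vectors: score each branch, softmax the scores into weights that sum to one, and take the weighted combination. A simplified stand-in for the paper's learned module (the linear scoring gate and the vectors below are illustrative placeholders):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def attention_fuse(f_hand, f_deep, w_gate):
    """Fuse one handcrafted and one deep feature vector with attention
    weights; in MFCFNet the weights come from a learned attention
    module rather than this fixed linear gate."""
    scores = np.array([w_gate @ f_hand, w_gate @ f_deep])
    a = softmax(scores)                    # per-branch contributions
    fused = a[0] * f_hand + a[1] * f_deep
    return fused, a

f_hand = np.array([1.0, 0.0, 0.0])   # e.g. a handcrafted descriptor
f_deep = np.array([0.0, 1.0, 1.0])   # e.g. a CNN embedding
fused, a = attention_fuse(f_hand, f_deep, np.ones(3))
```

The same weights `a` can also scale each branch's term in the total loss, which is how the paper ties feature contributions to supervision.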

    ISMAC: An Intelligent System for Customized Clinical Case Management and Analysis

    Clinical cases are primary and vital evidence for Traditional Chinese Medicine (TCM) clinical research. A great deal of medical knowledge is hidden in the clinical cases of highly experienced TCM practitioners. With a deep background in Chinese culture and years of clinical experience, an experienced TCM specialist usually has his or her own unique clinical patterns and diagnostic ideas. Preserving the vast body of clinical cases of experienced TCM practitioners, as well as exploring the knowledge inherent in them, is therefore an important but arduous task. The novel system ISMAC (Intelligent System for Management and Analysis of Clinical Cases in TCM) is designed and implemented for the customized management and intelligent analysis of TCM clinical data. Customized templates with standard and expert-standard symptoms, diseases, syndromes, and Chinese Medicine Formulas (CMF) are constructed in ISMAC according to the clinical diagnosis and treatment characteristics of each TCM specialist. With these templates, clinical cases are archived in a way that maintains their original characteristics. Various data analysis and mining methods, grouped as Basic Analysis, Association Rule, Feature Reduction, Cluster, Pattern Classification, and Pattern Prediction, are implemented in the system. With a flexible dataset retrieval mechanism, ISMAC is a powerful and convenient system for clinical case analysis and clinical knowledge discovery.
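
Of the analysis groups listed, Association Rule mining is the easiest to make concrete: over a set of archived cases, a rule "findings → formula" is scored by its support and confidence. A generic sketch of those two metrics (the case encoding and the codes below are invented placeholders, not ISMAC's actual schema):

```python
def rule_metrics(cases, lhs, rhs):
    """Support and confidence of the association rule lhs -> rhs over
    clinical cases, where each case is a set of coded findings and
    prescriptions."""
    lhs, rhs = set(lhs), set(rhs)
    n_lhs = sum(1 for c in cases if lhs <= c)
    n_both = sum(1 for c in cases if (lhs | rhs) <= c)
    support = n_both / len(cases)
    confidence = n_both / n_lhs if n_lhs else 0.0
    return support, confidence

# four toy cases, each a set of coded symptoms and a prescribed formula
cases = [{"cough", "fever", "formula_A"},
         {"cough", "formula_A"},
         {"fever", "formula_B"},
         {"cough", "fever", "formula_B"}]
s, c = rule_metrics(cases, {"cough"}, {"formula_A"})
```

Mining all high-support, high-confidence rules across a specialist's archive is one way such a system can surface an individual practitioner's prescription patterns.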